perm filename CAREY.1[LET,JMC] blob sn#701684 filedate 1983-02-08 generic text, type C, neo UTF8
@make(letterhead,Phone"497-4330",Who "John McCarthy", Logo Old, Department CSD)                    
@style[indent 5]
@blankspace ( 8 lines)
@begin(address)
Prof. Susan Carey
Dept. of Psychology
MIT
Cambridge, MA  02139
@end(address)
@greeting(Dear Professor Carey:)
@begin (body)

	Thanks for "When Heat and Temperature were One".

	I am interested in examples of confused concepts, because
I believe that all concepts except mathematical ones are confused
to some extent, so intelligent programs must be able to use
confused concepts.  When they discover or are told that they are
confused they should be able to amend them.
The interesting thing about the examples is how far one can go
with a confused concept, and what symptoms tell one that one is
using a confused concept.  The source-recipient model
of the Experimenters is interesting in that regard.

	Incidentally, I don't understand the remark on page 294
"as long as only one substance was involved it did not matter
whether one considered heat to be exchanged in the mixture, or
temperature itself, or some entity with properties of both."
One needn't distinguish between temperature and heat per unit mass,
but if one doesn't have both an extensive and an intensive quantity,
one can't make sense of results obtained when one mixes varying
amounts of a substance at two different temperatures.  Indeed
common sense experience tells us that mixing a small amount of
cold water doesn't change the temperature of some hot water much,
so it is hard for me to understand how making some intensive/extensive
distinction can be avoided as soon as one begins to consider mixing.
How then did they manage to remain confused?
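The mixing arithmetic alluded to above can be made explicit.  What follows is my own sketch, not anything from the letter or from the paper: it assumes a single substance, no heat loss, and constant specific heat, under which the extensive quantity (mass times temperature, up to the specific-heat constant) is conserved while temperature itself is not additive.

```python
def mix(m1, t1, m2, t2):
    """Final temperature when masses m1 and m2 of the same substance,
    at temperatures t1 and t2, are mixed with no heat loss.
    The extensive quantity m*t is conserved; the intensive quantity
    (temperature) is its mass-weighted average."""
    return (m1 * t1 + m2 * t2) / (m1 + m2)

# A small amount of cold water barely changes a lot of hot water:
print(mix(1.0, 10.0, 10.0, 90.0))  # about 82.7, still close to 90
# Equal amounts give the midpoint:
print(mix(5.0, 10.0, 5.0, 90.0))   # 50.0
```

With only a single, undifferentiated heat/temperature quantity there is no way to get the mass-weighted average; the asymmetry between the two trials above is exactly what forces the intensive/extensive distinction.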

	The example I have been using is a hypothetical law
against attempting to bribe a public official.  For 20 years
people are indicted and some are convicted before lawyers
offer the following defenses for various clients:

	1. You haven't proved my client knew he was a public official;
he might have thought he was a lawyer who could help him fix
his drunk driving conviction. -- de dicto defense.

	2. While my client offered him $5,000 to fix his
drunk driving conviction, it turns out that his term as
Commissioner of Motor Vehicles had just expired. -- de re defense.

	3. While my client did put an ad in the Criminal Gazette
offering $5,000 to any public official who would fix his drunk
driving conviction, you haven't proved that any public official
could fix the conviction, and the law requires that there be
a specific public official my client was attempting to bribe.
-- I don't know what you call this defense.
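The first two defenses correspond to the familiar scope distinction.  In rough first-order notation (my own rendering, not anything in the letter), with $c$ the client:

```latex
% de dicto (narrow scope): the existential lies inside the attempt;
% no actual official need exist, but the client's intention must
% involve the description "public official" -- defense 1 attacks this.
\mathit{Attempt}\bigl(c,\ \exists x\,(\mathit{PublicOfficial}(x)
    \land \mathit{Bribe}(c,x))\bigr)

% de re (wide scope): some particular x, actually a public official,
% is the one the client tried to bribe -- defense 2 attacks the
% conjunct PublicOfficial(x), defense 3 the existential itself.
\exists x\,\bigl(\mathit{PublicOfficial}(x)
    \land \mathit{Attempt}(c,\ \mathit{Bribe}(c,x))\bigr)
```

Intermediate readings exist as well (for instance, a specific $x$ with the official-hood inside the attempt), which is part of why the statute supports three distinct defenses.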
@newpage


	From my present AI point of view, the problem isn't to
resolve the ambiguities.  Both philosophers and lawyers have
discussed various ways of doing so.  Rather the problem is to
formalize in logic the state of mind that existed before the
ambiguities were recognized.  Moreover, expressing this state
of mind should not require realizing that there are ambiguities
to be resolved.

	I think I know how to do this.
@end(body)
Sincerely,
 
   
    
John McCarthy
Professor of Computer Science